
Learning Physical Constraints with Neural Projections

Neural Information Processing Systems

We propose a new family of neural networks that predict the behaviors of physical systems by learning their underpinning constraints. A neural projection operator lies at the heart of our approach: a lightweight network with an embedded recursive architecture that iteratively enforces the learned constraints to predict the governed behaviors of different physical systems. Our neural projection operator is motivated by the position-based dynamics model widely used in the game and visual-effects industries to unify fast physics simulators. Our method automatically and effectively uncovers a broad range of constraints from observed point data, such as length, angle, bending, collision, and boundary effects, as well as their arbitrary combinations, without any connectivity priors. We provide a multi-group point representation, in conjunction with a configurable network connection mechanism, to incorporate prior inputs for processing complex physical systems. We demonstrate the efficacy of our approach by learning a set of challenging physical systems in a unified and simple fashion, including rigid bodies with complex geometries, ropes with varying length and bending, articulated soft and rigid bodies, and multi-object collisions with complex boundaries.
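The projection idea described in the abstract can be sketched in the position-based-dynamics style it cites: repeatedly take minimal-displacement steps that drive a constraint residual C(x) toward zero. In the sketch below an analytic two-point distance constraint stands in for the learned constraint network, and all names (`C`, `grad_C`, `project`, `REST_LENGTH`) are illustrative, not the authors' code:

```python
import numpy as np

REST_LENGTH = 1.0  # target distance between the two points (illustrative)

def C(x):
    """Constraint residual: distance between two points minus rest length."""
    return np.linalg.norm(x[0] - x[1]) - REST_LENGTH

def grad_C(x):
    """Analytic gradient of C; a learned constraint network would use autodiff."""
    n = (x[0] - x[1]) / np.linalg.norm(x[0] - x[1])
    return np.stack([n, -n])

def project(x, n_iters=5):
    """PBD-style recursive projection: minimal-displacement steps toward C = 0."""
    for _ in range(n_iters):
        g = grad_C(x)
        x = x - C(x) * g / (np.sum(g * g) + 1e-12)
    return x

x0 = np.array([[0.0, 0.0], [1.5, 0.0]])  # initial state violates the rest length
x_proj = project(x0)
```

For this simple distance constraint a single step already lands on the constraint manifold; the recursive iteration matters when several learned constraints interact.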


ENFORCE: Exact Nonlinear Constrained Learning with Adaptive-depth Neural Projection

Lastrucci, Giacomo, Schweidtmann, Artur M.

arXiv.org Artificial Intelligence

Ensuring that neural networks adhere to domain-specific constraints is crucial for addressing safety and ethical concerns, and it can also improve prediction accuracy. Despite the nonlinear nature of most real-world tasks, existing methods are predominantly limited to affine or convex constraints. We introduce ENFORCE, a neural network architecture that guarantees predictions satisfy nonlinear constraints exactly. ENFORCE is trained with standard unconstrained gradient-based optimizers (e.g., Adam) and leverages autodifferentiation and local neural projections to enforce any $\mathcal{C}^1$ constraint to arbitrary tolerance $\epsilon$. We build an adaptive-depth neural projection (AdaNP) module that dynamically adjusts its complexity to suit the specific problem and the required tolerance level. ENFORCE guarantees satisfaction of equality constraints that are nonlinear in both the inputs and outputs of the neural network, at minimal (and adjustable) computational cost.
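The mechanics behind an adaptive-depth projection can be sketched as stacking local, linearized projection steps until a nonlinear constraint h(x, y) = 0 holds to tolerance ε. The constraint `h` and the raw prediction below are illustrative stand-ins, assumed for this sketch; ENFORCE itself differentiates through the trained network:

```python
import numpy as np

def h(x, y):
    """Example C^1 equality constraint, nonlinear in input x and output y."""
    return y[0] ** 2 + x * y[1] - 1.0

def grad_h_y(x, y):
    """Gradient of h with respect to y (autodiff would supply this in general)."""
    return np.array([2.0 * y[0], x])

def adaptive_projection(x, y, eps=1e-8, max_depth=50):
    """Stack projection layers until |h| < eps; the depth adapts to the problem."""
    depth = 0
    while abs(h(x, y)) > eps and depth < max_depth:
        g = grad_h_y(x, y)
        y = y - h(x, y) * g / (g @ g)  # minimum-norm Newton-style correction
        depth += 1
    return y, depth

x = 0.5
y0 = np.array([1.0, 1.0])  # raw network output; violates h(x, y) = 0
y_star, depth = adaptive_projection(x, y0)
```

The number of stacked steps is not fixed in advance: a loose tolerance or a nearly satisfied constraint needs few layers, a tight tolerance needs more, which is the sense in which the depth is adaptive.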


Review for NeurIPS paper: Learning Physical Constraints with Neural Projections

Neural Information Processing Systems

Weaknesses: From the results, it seems that separate models are trained for each system, but within each system there is not much procedural variation in the structure/relative distances of the particles, only in the initial positions/velocities. Do you expect this to work in cases with more procedural variation in the relative positioning of the points (e.g., if sometimes the rigid body is a square and sometimes an arbitrary trapezoid)? I guess this could work if you added another input to C with the previous state, or with some reference distances, but it does not seem it would work in the current form of the model: it would be impossible for the C function to tell whether the constraints are satisfied just by looking at the positions of the points, since it does not know whether the constraints to be satisfied are those of a square or those of a specific trapezoid. Similarly, I wonder how much context from the other particles the model relies on to infer how systems should collide with a wall. For example, consider predictions for a system with a single particle that, in a single timestep, should bounce elastically off a wall. I wonder whether the model would always put the particle right at the wall: the linear prediction would move it past the wall, and the constraint satisfaction would place it back at the wall, where the constraint is satisfied with minimal displacement, but it would not make the particle bounce back off the wall. In the next step the linear extrapolation would do essentially the same thing, and beyond that the particle would become permanently stuck at the wall once two consecutive positions place it there.

